OpenAI announces five-fold increase in bug bounty reward

OpenAI has announced a slew of new cybersecurity initiatives, including a five-fold increase in the maximum award for its bug bounty program.
In a blog post confirming the move, the organization set out plans to expand its cybersecurity grant program. So far, the tech giant has given funding to 28 research projects looking at both offensive and defensive security measures, including autonomous cybersecurity defenses, secure code generation, and prompt injection.
The program is now soliciting proposals for five new areas of research: software patching, model privacy, detection and response, security integration, and agentic AI security.
It’s also introducing what it terms microgrants in the form of API credits for “high quality proposals”.
Alongside the expanded grant program, the company announced it is expanding its security bug bounty program, which first launched in April 2023.
The primary change is an increase of the maximum bounty award from $20,000 to $100,000, which OpenAI said “reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems”.
Additionally, it’s launching “limited-time promotions”, the first of which is live now and ends on 30 April 2025. During these periods, researchers can receive additional bounty bonuses for valid work in a specific bug category. More information can be found on OpenAI’s Bugcrowd page.
OpenAI is still all-in on AGI
OpenAI has pitched the extended grant program and the increased maximum bug bounty payout as crucial to its “ambitious path towards AGI (artificial general intelligence)”.
AGI is commonly understood to mean AI with human-level intelligence that isn’t confined to a single specialism.
It’s also a controversial topic that has divided those in the AI community and beyond into three camps: those who believe its development is inevitable and necessary, those who believe its development could mean the end of civilization, and those who believe it’s both impossible and undesirable.
OpenAI CEO Sam Altman is firmly in the first camp, stating in a January 2025 post to his personal blog: “We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial.”
More recently, he said “systems that start to point to AGI are coming into view” and laid out how this may play out over the next ten years. He also caveated the statement in a footnote, however, saying OpenAI “[doesn’t] intend to alter or interpret the definitions and processes that define [its] relationship with Microsoft”.
Altman said the footnote may seem “silly”, but “some journalists will try to get clicks by writing something silly”.
Nevertheless, it’s unsurprising the specter of Microsoft was raised in the context of this blog; Microsoft CEO Satya Nadella has been openly critical of the AI industry’s focus on AGI.
In a recent podcast appearance, Nadella described the industry’s focus on AGI as “nonsensical benchmark hacking”.